Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on near-term noisy quantum computers. However, training such variational quantum algorithms suffers from gradient vanishing as the size of the algorithm increases. Previous work cannot handle the gradient vanishing induced by the unavoidable noise effects of realistic quantum hardware. In this paper, we propose a novel training scheme to mitigate such noise-induced gradient vanishing. We first introduce a new cost function, whose gradients are significantly augmented by employing traceless observables in a truncated subspace. We then prove that the same minimum can be reached by optimizing the original cost function with the gradients from the new cost function. Experiments show that our new training scheme is highly effective for major variational quantum algorithms on various tasks.
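To illustrate the core idea of the training scheme, here is a minimal numerical sketch (not the paper's implementation) of descending the original cost while stepping along gradients taken from a surrogate cost. The toy functions `original` and `grad_new` are hypothetical stand-ins that share a minimizer, with the surrogate's gradients deliberately larger:

```python
import numpy as np

def surrogate_gradient_descent(theta, grad_new_cost, original_cost,
                               lr=0.05, steps=500, tol=1e-10):
    """Minimize the original cost while stepping along gradients of a
    surrogate (gradient-augmented) cost, as the abstract describes."""
    history = [original_cost(theta)]
    for _ in range(steps):
        theta = theta - lr * grad_new_cost(theta)
        history.append(original_cost(theta))
        if abs(history[-1] - history[-2]) < tol:
            break
    return theta, history

# Toy demo: the original cost has tiny (vanishing) gradients, while the
# surrogate shares the same minimizer with amplified gradients.
original = lambda t: 1e-3 * np.sum((t - 1.0) ** 2)
grad_new = lambda t: 2.0 * (t - 1.0)
theta, hist = surrogate_gradient_descent(np.zeros(4), grad_new, original)
print(theta, hist[-1])  # converges to the shared minimizer at 1.0
```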
Automated plot generation is the challenge of generating a sequence of events that will be perceived by readers as the plot of a coherent story. Traditional symbolic planners plan a story from a goal state and guarantee logical causal plot coherence but rely on a library of hand-crafted actions with their preconditions and effects. This closed world setting limits the length and diversity of what symbolic planners can generate. On the other hand, pre-trained neural language models can generate stories with great diversity, while being generally incapable of ending a story in a specified manner and can have trouble maintaining coherence. In this paper, we present an approach to story plot generation that unifies causal planning with neural language models. We propose to use commonsense knowledge extracted from large language models to recursively expand a story plot in a backward chaining fashion. Specifically, our system infers the preconditions for events in the story and then events that will cause those conditions to become true. We performed automatic evaluation to measure narrative coherence as indicated by the ability to answer questions about whether different events in the story are causally related to other events. Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
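A minimal sketch of the backward-chaining loop described above, with `infer_preconditions` and `infer_causing_event` as hypothetical stand-ins for the commonsense queries issued to a large language model (canned lookups here so the sketch runs standalone):

```python
def infer_preconditions(event):
    # Hypothetical LLM query: what must be true before `event` can happen?
    canned = {
        "The knight slays the dragon.": ["the knight has a sword"],
        "The blacksmith gives the knight a sword.": ["a sword was forged"],
        "The blacksmith forges a sword.": [],
    }
    return canned.get(event, [])

def infer_causing_event(condition):
    # Hypothetical LLM query: what event makes `condition` true?
    canned = {
        "the knight has a sword": "The blacksmith gives the knight a sword.",
        "a sword was forged": "The blacksmith forges a sword.",
    }
    return canned.get(condition)

def backward_chain(event, plot=None):
    """Recursively expand the plot backward from the story's ending."""
    if plot is None:
        plot = []
    for condition in infer_preconditions(event):
        cause = infer_causing_event(condition)
        if cause is not None:
            backward_chain(cause, plot)  # causes are inserted before effects
    plot.append(event)
    return plot

print(backward_chain("The knight slays the dragon."))
# ['The blacksmith forges a sword.',
#  'The blacksmith gives the knight a sword.',
#  'The knight slays the dragon.']
```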
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering, which is labor-intensive and often inconsistent, or logit-manipulation methods, which require annotated datasets for the desired attributes. To address these issues, we first train a contrastive bi-encoder model, named CARP, to align stories with corresponding human critiques, building a general-purpose preference model. This model is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably yield a story generation system capable of generating stories that meet user preferences. To increase story generation robustness, we further fine-tune the contrastive reward model using a prompt-learning technique. A human participant study is then conducted comparing generations from our full system, ablations, and two baselines. We show that the full fine-tuning pipeline results in a story generator preferred over an LLM 20x as large, as well as over logit-based methods. This motivates the use of contrastive learning for general-purpose human preference modeling.
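As a rough illustration of using a bi-encoder preference model as a reward signal, here is a toy sketch; the encoders and featurizer are hypothetical stand-ins (random projections over a bag-of-bytes), not CARP itself:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

class ContrastiveReward:
    """Toy bi-encoder reward: separate story and critique encoders map
    text into a shared space; the similarity score serves as the reward
    for reinforcement-learning fine-tuning."""
    def __init__(self, dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_story = rng.normal(size=(dim, 256))  # stand-in story encoder
        self.W_crit = rng.normal(size=(dim, 256))   # stand-in critique encoder

    def _featurize(self, text):
        # Toy bag-of-bytes featurizer so the sketch runs standalone.
        v = np.zeros(256)
        for b in text.encode("utf-8"):
            v[b] += 1.0
        return v

    def reward(self, story, critique):
        s = self.W_story @ self._featurize(story)
        c = self.W_crit @ self._featurize(critique)
        return cosine(s, c)  # higher = story better matches the preference

rm = ContrastiveReward()
print(rm.reward("The hero returns home triumphant.", "I want a happy ending."))
```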
Vision-based localization approaches now underpin newly emerging navigation pipelines for a multitude of use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible to deploy. In this paper, we propose a vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are used in downstream tasks that employ a weighted-average method to estimate the end user's location and a perspective-n-point (PnP) algorithm to estimate the end user's orientation. In addition, the system implements Dijkstra's algorithm to compute the shortest path on a navigable map that includes the trip origin and destination. The layered map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. The sequential images used for map construction can be collected in a pre-mapping step or sourced from public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The results show that our system can achieve localization with an average error of less than 1 meter, without knowledge of the camera's intrinsic parameters such as focal length.
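Two pieces of the pipeline are concrete enough to sketch directly: the weighted-average location estimate over VPR matches (assuming similarity scores serve as weights, which the abstract does not specify) and Dijkstra's shortest path on the navigable map:

```python
import heapq

def estimate_location(matches):
    """Weighted average of the geolocations of VPR-retrieved images.
    `matches` is a list of (similarity, (x, y)) pairs."""
    total = sum(w for w, _ in matches)
    x = sum(w * p[0] for w, p in matches) / total
    y = sum(w * p[1] for w, p in matches) / total
    return (x, y)

def dijkstra(graph, origin, destination):
    """Shortest path on a navigable map given as {node: [(neighbor, cost)]}."""
    dist, prev = {origin: 0.0}, {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], destination
    while node != origin:
        path.append(node)
        node = prev[node]
    path.append(origin)
    return path[::-1]

print(estimate_location([(0.9, (1.0, 2.0)), (0.6, (2.0, 2.5))]))  # (1.4, 2.2)
print(dijkstra({"A": [("B", 1), ("C", 4)], "B": [("C", 1)]}, "A", "C"))
```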
Learning a single static convolutional kernel in each convolutional layer is the common training paradigm of modern Convolutional Neural Networks (CNNs). Instead, recent research on dynamic convolution shows that learning a linear combination of $n$ convolutional kernels weighted with input-dependent attention can significantly improve the accuracy of lightweight CNNs while maintaining efficient inference. However, we observe that existing works endow convolutional kernels with the dynamic property through only one dimension of the kernel space (the convolutional kernel number), while the other three dimensions (the spatial size, the input channel number, and the output channel number of each convolutional kernel) are overlooked. Inspired by this, we present Omni-dimensional Dynamic Convolution (ODConv), a more generalized yet elegant dynamic convolution design, to advance this line of research. ODConv leverages a novel multi-dimensional attention mechanism with a parallel strategy to learn complementary attentions for convolutional kernels along all four dimensions of the kernel space at any convolutional layer. As a drop-in replacement for regular convolutions, ODConv can be plugged into many CNN architectures. Extensive experiments on the ImageNet and MS-COCO datasets show that ODConv brings solid accuracy boosts for various prevailing CNN backbones, both lightweight and large, e.g., 3.77%~5.71% | 1.86%~3.72% absolute top-1 improvements for the MobileNetV2 | ResNet family on the ImageNet dataset. Intriguingly, thanks to its improved feature learning ability, ODConv with even a single kernel can compete with or outperform existing dynamic convolution counterparts with multiple kernels, substantially reducing the extra parameters. Furthermore, ODConv is also superior to other attention modules for modulating output features or convolutional weights.
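A simplified PyTorch sketch of the omni-dimensional idea, with four parallel attentions modulating the candidate kernels; details such as the attention normalization and the grouped-convolution trick for per-sample kernels are assumptions of this sketch, not the official implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OmniDynamicConv2d(nn.Module):
    """Sketch: n candidate kernels are modulated by four attentions
    (spatial, input-channel, output-channel, kernel-number) computed
    from a global context vector of the input."""
    def __init__(self, c_in, c_out, k=3, n=4, reduction=4):
        super().__init__()
        self.k, self.n = k, n
        self.weight = nn.Parameter(torch.randn(n, c_out, c_in, k, k) * 0.02)
        hidden = max(c_in // reduction, 4)
        self.fc = nn.Linear(c_in, hidden)
        self.att_spatial = nn.Linear(hidden, k * k)
        self.att_in = nn.Linear(hidden, c_in)
        self.att_out = nn.Linear(hidden, c_out)
        self.att_kernel = nn.Linear(hidden, n)

    def forward(self, x):
        b, c_in, h, w = x.shape
        ctx = F.relu(self.fc(x.mean(dim=(2, 3))))  # global context, (b, hidden)
        a_sp = torch.sigmoid(self.att_spatial(ctx)).view(b, 1, 1, 1, self.k, self.k)
        a_in = torch.sigmoid(self.att_in(ctx)).view(b, 1, 1, c_in, 1, 1)
        a_out = torch.sigmoid(self.att_out(ctx)).view(b, 1, -1, 1, 1, 1)
        a_k = torch.softmax(self.att_kernel(ctx), dim=1).view(b, self.n, 1, 1, 1, 1)
        # Combine the n kernels with all four attentions, per sample.
        w_dyn = (a_k * a_sp * a_in * a_out * self.weight.unsqueeze(0)).sum(dim=1)
        # Grouped-conv trick: fold the batch into groups for per-sample kernels.
        out = F.conv2d(x.reshape(1, b * c_in, h, w),
                       w_dyn.reshape(-1, c_in, self.k, self.k),
                       padding=self.k // 2, groups=b)
        return out.reshape(b, -1, h, w)

y = OmniDynamicConv2d(8, 16)(torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```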
Recently, Deep Neural Networks (DNNs) have been utilized to reduce the bandwidth and improve the quality of Internet video delivery. Existing methods train a corresponding content-aware super-resolution (SR) model for each video chunk on the server, and stream low-resolution (LR) video chunks along with the SR models to the client. Although they achieve promising results, the huge computational cost of network training limits their practical applications. In this paper, we present a method named Efficient Meta-Tuning (EMT) to reduce the computational cost. Instead of training from scratch, EMT adapts a meta-learned model to the first chunk of the input video. For the following chunks, it fine-tunes only the partial parameters selected by gradient masking of the previously adapted model. To achieve further speedup for EMT, we propose a novel sampling strategy to extract the most challenging patches from video frames. The proposed strategy is highly efficient and brings negligible additional cost. Our method significantly reduces the computational cost while achieving better performance, paving the way for applying neural video delivery techniques to practical applications. We conduct extensive experiments with various efficient SR architectures, including ESPCN, SRCNN, FSRCNN, and EDSR-1, demonstrating the generalization ability of our work. The code is released at https://github.com/neural-video-delivery/emt-pytorch-eccv2022.
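A toy sketch of the gradient-masking step: under the assumption that "selecting partial parameters by gradient masking" means keeping the parameters with the largest gradient magnitudes from the previously adapted model, later chunks update only that subset:

```python
import torch
import torch.nn.functional as F

def top_gradient_mask(model, loss, ratio=0.1):
    """Build binary masks keeping the top `ratio` of parameters by
    gradient magnitude (an assumption about EMT's selection rule)."""
    loss.backward()
    grads = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(grads, 1.0 - ratio)
    masks = [(p.grad.abs() >= threshold).float() for p in model.parameters()]
    model.zero_grad()
    return masks

def masked_sgd_step(model, loss, masks, lr=1e-3):
    """Update only the selected subset of parameters for later chunks."""
    loss.backward()
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p -= lr * m * p.grad  # masked-out entries receive no update
    model.zero_grad()

# Toy usage with a hypothetical tiny SR-like model and random data.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                            torch.nn.ReLU(),
                            torch.nn.Conv2d(8, 3, 3, padding=1))
x, y = torch.randn(1, 3, 16, 16), torch.randn(1, 3, 16, 16)
masks = top_gradient_mask(model, F.mse_loss(model(x), y))
masked_sgd_step(model, F.mse_loss(model(x), y), masks)
```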
A recent study has shown a phenomenon called neural collapse: at the terminal phase of training for classification, the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
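For reference, the simplex equiangular tight frame (ETF) that neural collapse converges to can be constructed explicitly, and a cosine-based pull toward it gives one plausible form of the feature-center regularizer (the exact loss in the paper may differ):

```python
import torch

def simplex_etf(num_classes, dim, seed=0):
    """K equiangular, maximally separated unit vectors in R^dim:
    M = sqrt(K/(K-1)) * U (I - 11^T / K), with U orthonormal columns."""
    g = torch.Generator().manual_seed(seed)
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes, generator=g))
    k = num_classes
    m = u @ (torch.eye(k) - torch.ones(k, k) / k)
    m = m * (k / (k - 1)) ** 0.5
    return m / m.norm(dim=0, keepdim=True)  # columns are class targets

def center_regularizer(feature_centers, etf):
    """Sketch: pull per-class feature centers (dim x K) toward the ETF
    directions via 1 - cosine similarity (an assumed loss form)."""
    centers = feature_centers / feature_centers.norm(dim=0, keepdim=True)
    return (1.0 - (centers * etf).sum(dim=0)).mean()

etf = simplex_etf(num_classes=20, dim=64)
# Pairwise cosines between distinct ETF columns equal -1/(K-1) ~= -0.0526.
print(float((etf.T @ etf)[0, 1]))
```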
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
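For context, the standard CAM computation that KD-CI-CAM builds on is easy to sketch; the causal-intervention and multi-teacher distillation components are beyond this snippet:

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    """Standard CAM: weight the last conv feature maps by the classifier
    weights of the target class, then upsample. `features` is (C, H, W)
    from the backbone; `fc_weight` is (num_classes, C)."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]

feats = torch.rand(512, 7, 7)   # hypothetical backbone output
w = torch.rand(1000, 512)       # hypothetical classifier weights
cam = class_activation_map(feats, w, class_idx=3, out_size=(224, 224))
print(cam.shape, float(cam.max()))
```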
Witnessing the impressive achievements of pre-training techniques on large-scale data in computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variable nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns a driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
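The second-stage objective relies on a photometric error between the target frame and the source frame warped by predicted pose and depth. A common SSIM+L1 formulation is sketched below as an assumption (PPGeo's exact loss and the warping step are not reproduced here):

```python
import torch
import torch.nn.functional as F

def photometric_error(frame_warped, frame_target, alpha=0.85):
    """Standard self-supervised photometric loss: a weighted mix of a
    local SSIM term and an L1 term over (B, 3, H, W) image tensors."""
    l1 = (frame_warped - frame_target).abs().mean(dim=1, keepdim=True)
    # 3x3 local statistics for SSIM.
    mu_x = F.avg_pool2d(frame_warped, 3, 1, 1)
    mu_y = F.avg_pool2d(frame_target, 3, 1, 1)
    sigma_x = F.avg_pool2d(frame_warped ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(frame_target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(frame_warped * frame_target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    ssim_loss = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)
    return (alpha * ssim_loss + (1 - alpha) * l1).mean()

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(float(photometric_error(a, b)))
```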